
Collaborating Authors

Ada Lovelace Institute


The Age of the All-Access AI Agent Is Here

WIRED

Big AI companies courted controversy by scraping wide swaths of the public internet. With the rise of AI agents, the next data grab is far more private. For years, the cost of using "free" services from Google, Facebook, Microsoft, and other Big Tech firms has been handing over your data. Uploading your life into the cloud and using free tech brings conveniences, but it puts personal information in the hands of giant corporations that will often be looking to monetize it. Now, the next wave of generative AI systems is likely to want more access to your data than ever before. Over the past two years, generative AI tools, such as OpenAI's ChatGPT and Google's Gemini, have moved beyond the relatively straightforward, text-only chatbots that the companies initially released.


The Role of Governments in Increasing Interconnected Post-Deployment Monitoring of AI

Stein, Merlin, Bernardi, Jamie, Dunlop, Connor

arXiv.org Artificial Intelligence

Language-based AI systems are diffusing into society, bringing positive and negative impacts. Mitigating negative impacts depends on accurate impact assessments, drawn from an empirical evidence base that makes causal connections between AI usage and impacts. Interconnected post-deployment monitoring combines information about model integration and use, application use, and incidents and impacts. For example, inference-time monitoring of chain-of-thought reasoning can be combined with long-term monitoring of sectoral AI diffusion, impacts and incidents. Drawing on information-sharing mechanisms in other industries, we highlight example data sources and specific data points that governments could collect to inform AI risk management.


'Risks posed by AI are real': EU moves to beat the algorithms that ruin lives

#artificialintelligence

It started with a single tweet in November 2019. David Heinemeier Hansson, a high-profile tech entrepreneur, lashed out at Apple's newly launched credit card, calling it "sexist" for offering his wife a credit limit 20 times lower than his own. The allegations spread like wildfire, with Hansson stressing that artificial intelligence – now widely used to make lending decisions – was to blame. "It does not matter what the intent of individual Apple reps are, it matters what THE ALGORITHM they've placed their complete faith in does. And what it does is discriminate." While Apple and its underwriter Goldman Sachs were ultimately cleared by US regulators of violating fair lending rules last year, the episode rekindled a wider debate around AI use across public and private industries. Politicians in the European Union are now planning to introduce the first comprehensive global template for regulating AI, as institutions increasingly automate routine tasks in an attempt to boost efficiency and ...


'Globally significant' AI Act must recognise those affected by AI

#artificialintelligence

The Ada Lovelace Institute is an independent research institute, based in the UK and Brussels, with a mission to ensure data and AI work for people and society. Centring those affected by AI, Ada recommends enshrining legal rights for complaint and collective action and giving civil society a voice within standards setting. Ada recommends expanding and reshaping the role of risk in the Act. Risk should be based on 'reasonably foreseeable' purpose and extended beyond individual rights and safety, to also include systemic and environmental risks. The Ada Lovelace Institute has today published a series of proposed amendments to the EU AI Act aimed at recognising and empowering those affected by AI, expanding and reshaping the meaning of 'risk' and accurately reflecting the nature of AI systems and their lifecycle.


EU Act 'must empower those affected by AI systems to take action'

#artificialintelligence

Independent research organisation the Ada Lovelace Institute has published a series of proposals on how the European Union (EU) can amend its forthcoming Artificial Intelligence Act (AIA) to empower those affected by the technology on both an individual and collective level. The proposed amendments also aim to expand and reshape the meaning of "risk" within the regulation, which the Institute has said should be based on "reasonably foreseeable" purpose and extend beyond its current focus on individual rights and safety to also include systemic and environmental risks. "Regulating AI is a difficult legal challenge, so the EU should be congratulated for being the first to come out with a comprehensive framework," said Alexandru Circiumaru, European public policy lead at the Ada Lovelace Institute. "However, the current proposals can and should be improved, and there is an opportunity for EU policymakers to significantly strengthen the scope and effectiveness of this landmark legislation." As it currently stands, the AIA, which was published by the European Commission (EC) on 21 April 2021, adopts a risk-based, market-led approach to regulating the technology, focusing on establishing rules around the use of "high-risk" and "prohibited" AI practices.


Visiting Senior Researcher – Climate & AI (Ada Lovelace Institute)

#artificialintelligence

The Ada Lovelace Institute (Ada) is hiring a Visiting Senior Researcher to lead a research project exploring the climate impacts of AI and data-driven systems. This project fits within our programme of work around Ethics and Accountability in Practice, and will explore how regulators, industry practitioners and researchers can evaluate and assess the climate impact of an AI system at various stages of its lifecycle.

The role

Addressing the environmental impact of data and AI is critical to ensuring a future in which data and AI work for people and society. By one estimate, information and communication technologies (ICTs) are projected to account for 14% of global greenhouse gas emissions by 2040, with nearly half of this predicted to come from data centres. There is an urgent need for developers, practitioners, procurers and researchers of AI and data-driven technologies to evaluate and account for the potential climate impact of their systems.

The role of Visiting Senior Researcher, Climate and AI, provides an excellent opportunity for a mid-career researcher to craft and execute a research project in a dynamic and energetic policy- and practice-facing organisation. This is a new position created as part of our 2021-2024 strategy, which organises Ada's research under five programmatic priorities: the Future of Regulation; Ethics & Accountability in Practice; Public Sector Use of Data & Algorithms; Biometrics; and Health Data and COVID-19 Technologies.

To date, our Ethics & Accountability in Practice programme has focused on developing methods for AI and data practitioners and regulators to evaluate and assess potential risks, harms and impacts of AI and data-driven technologies. This role will be expected to expand the focus of this work to develop and test methods, tools and practices for evaluating climate impacts for public and private-sector organisations.

This project is made possible by a grant from the Generation Foundation. For further information about the role (including details of the outputs this role will deliver, and what a typical day could look like for you), please download the full job description.

About you

You are an experienced researcher or professional who may have a background researching for an academic organisation, a policy department or regulator, a tech company, a research institute or a charity. You are curious and passionate about the issues which arise at the intersection of technology and society, and are committed to bringing an interdisciplinary and intersectional lens to understanding them. Importantly, you'll be comfortable taking initiative, working independently and, at times, to short deadlines.

You'll enjoy working in a team environment, willing to jump into projects and keen to explore areas of policy, technology and practice that you don't already understand. You'll appreciate the importance of high standards of rigour in research, but also want to think creatively about communicating and influencing in novel ways.

How to apply

The closing date for applications is 09:00 BST on 25 April 2022, with interviews expected to take place in the first weeks of May 2022. The online application process will ask you to complete four questions (no more than 250 words each) relating to your background, skills and interest in this role, and to upload an up-to-date copy of your CV. The Applied platform lets you save an application and resume it before submitting, ahead of the application deadline. After the deadline closes, we will shortlist candidates and update you on whether your application was successful.

Applicants moved to the interview stage should expect:

- We aim to give you at least a week's notice of an interview, which may involve preparing a presentation or completing a writing exercise.
- An hour-long interview (with a panel made up of Ada staff and often an external partner), with the potential for a follow-up/second round of interviews.

We strongly encourage applicants from backgrounds that are underrepresented in the research, policy and technology sectors (for example, those from a marginalised community, those who did not go to university, or those who had free school meals as a child). We are committed to tackling societal injustice and inequality through our work, and believe that all kinds of experiences and backgrounds can contribute to this mission.

The Ada Lovelace Institute

The Ada Lovelace Institute is an independent research institute funded and incubated by the Nuffield Foundation since 2018. Our mission is to ensure data and artificial intelligence work for people and society. We do this by building evidence and fostering rigorous debate on how data and AI affect people and society. We recognise the power asymmetries that exist in ethical and legal debates around the development of data-driven technologies, and seek to level those asymmetries by convening diverse voices and creating a shared understanding of the ethical issues arising from data and AI. Finally, we seek to define and inform good practice in the design and deployment of AI technologies.

The Institute has emerged as a leading independent voice on the ethical and societal impacts of data and AI. We have built relationships in the public, private and civil society sectors in the UK and internationally. Our research takes an interconnected approach to issues such as power, social justice, distributional impact and climate change (read our strategy to find out more), and our team have a wide range of expertise that cuts across policy, technology, academia, industry, law and human rights. We value diversity in background, skills, perspectives and life experiences. As part of the Nuffield Foundation, we are a small team with the practical support of an established organisation that cares for its employees.


Looking into the use of artificial intelligence in healthcare

#artificialintelligence

In his latest column for Digital Health, Andrew Davies, digital health lead at the Association of British HealthTech Industries (ABHI), explores the use of artificial intelligence (AI) in healthcare. Most of us will have seen films or read sci-fi books about malevolent robots taking over the world, although usually, in best Hollywood style, humanity wins out in the end. This is the scary end of AI, the out-of-control robot that is autonomous and bent on world domination. Of course, the reality of the situation today is far from this, but that is not to say that AI cannot cause harm if deployed carelessly.


NHS to trial approach to eradicate AI biases

#artificialintelligence

In a world first, the NHS in England is trialling a new approach to the ethical adoption of AI in healthcare, with the aim of eradicating biases in artificial intelligence. Algorithmic impact assessments (AIAs) designed by the Ada Lovelace Institute will be piloted to support researchers and developers in assessing the possible risks and biases of AI systems to patients and the public before they can access NHS data. While artificial intelligence has the potential to support health and care workers to deliver better care for people, it could also exacerbate existing health inequalities if concerns such as algorithmic bias aren't accounted for. Innovation minister Lord Kamall said: "While AI has great potential to transform health and care services, we must tackle biases which have the potential to do further harm to some populations as part of our mission to eradicate health disparities. This pilot once again demonstrates the UK is at the forefront of adopting new technologies in a way that is ethical and patient-centred."


UK to pilot world-leading approach to improve ethical adoption of AI in healthcare

#artificialintelligence

In a world first, the NHS in England is trialling a new approach to the ethical adoption of AI in healthcare, with the aim of eradicating biases in artificial intelligence. Algorithmic impact assessments (AIAs), designed by the Ada Lovelace Institute, will be piloted to support researchers and developers in assessing the possible risks and biases of AI systems to patients and the public before they can access NHS data. While artificial intelligence has the potential to support health and care workers to deliver better care for people, it could also exacerbate existing health inequalities if concerns such as algorithmic bias aren't accounted for. While AI has great potential to transform health and care services, we must tackle biases which have the potential to do further harm to some populations as part of our mission to eradicate health disparities. This pilot once again demonstrates the UK is at the forefront of adopting new technologies in a way that is ethical and patient-centred.


7 ways the technology sector could support global society in 2022 - JackOfAllTechs.com

#artificialintelligence

Some of the excesses of 2021 have shown us how digital technologies can undermine what philosophers call future "human flourishing." A lot has been written on this topic in the first few days of the new year, but take two examples (MIT Technology Review's list of the worst excesses of technology and Fast Company's 5 best and worst tech moments of 2021) and it's evident how little power people affected by technologies have when things go wrong under current systems. What's also clear as we enter 2022 is that global tolerance for technology's unchecked disruption of societal institutions, conventions, and values is waning. This is the year governments will pass legislation to control the effects of digital technologies on societies, across many jurisdictions and in relation to numerous existing and emergent technologies. The EU AI and Digital Services Acts, the UK Online Safety Bill, and the US SAFE TECH Act are just a few of the efforts underway. Legislation is a marker of societal concern, but it's also clear that non-specialist, "ordinary" people have an increasingly sophisticated understanding of the relationship between technology and society.